
    Information Extraction from Heterogeneous WWW Resources

    The information available on the WWW is growing very fast. However, a fundamental problem with this information is its lack of structure, which makes it difficult to exploit. As a result, the desired information is becoming harder to retrieve and extract. To overcome this problem, many tools and techniques are being developed for locating web pages of interest and extracting the desired information from those pages. In this paper we present the first prototype of an Information Extraction (IE) system that attempts to extract information on the different Computer Science related courses offered by British universities.
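    Extracting structured records from semi-structured course pages is often done with hand-written patterns. The sketch below is a minimal, illustrative example of this idea; the HTML snippet, the field names, and the patterns are assumptions for demonstration, not the paper's actual wrappers.

```python
import re

# Toy course page; real pages vary, which is exactly the heterogeneity problem.
page = """
<h2>BSc (Hons) Computer Science</h2>
<p>Duration: 3 years</p>
<p>UCAS code: G400</p>
"""

# One hand-written pattern per field of interest.
record = {
    "title": re.search(r"<h2>(.*?)</h2>", page).group(1),
    "duration": re.search(r"Duration:\s*([^<]+)", page).group(1).strip(),
    "ucas_code": re.search(r"UCAS code:\s*([^<]+)", page).group(1).strip(),
}
print(record)
```

Pattern-based wrappers like this are brittle across sites, which motivates more robust IE techniques for heterogeneous sources.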

    A review of the generation of requirements specification in natural language using objects UML models and domain ontology

    In the software development life cycle, requirements engineering is the main process through which requirements are elicited from users in informal interviews and written in natural language by requirements engineers (analysts). These requirements may suffer from incompleteness and ambiguity when transformed into formal or semi-formal models, which are not well understood by stakeholders. Hence, stakeholders cannot verify whether the formal or semi-formal models satisfy their needs and requirements. Another problem is that when code and/or designs are updated, the requirements, and specifically the requirements document, are often not updated, leaving a requirements document that does not reflect the implemented software. Generating requirements from design and/or implementation documents is seen by many researchers as a way to address the latter issue. This paper presents a survey of work undertaken in the field of generating natural language specifications from object-oriented UML models with the support of an ontology, and analyses the robustness and limitations of these existing approaches. This includes studying the generation of natural language from formal models, reviewing the generation of natural language from ontologies, and finally reviewing studies on generating natural language from OntoUML.

    Cash flow and the management of revenue and risk case study of tebessa cement corporation SCT

    This study aims to shed light on the role played by cash flow management in balancing the achievement of the highest levels of profitability against the minimisation of the associated risks, by analysing and measuring both profitability and risk, focusing on the procedures for raising the efficiency of cash flow management, and then applying the study to the practical reality of the Tebessa Cement Corporation (SCT). The study found that the corporation can achieve the highest levels of profitability and minimise risks through good management of the components of working capital requirements, financing any deficit under the best possible conditions, and holding an optimal amount of cash so that idle surpluses do not accumulate and give rise to hidden costs in the form of lost opportunities.

    MRI brain classification using the quantum entropy LBP and deep-learning-based features

    Brain tumor detection at early stages can increase the chances of the patient’s recovery after treatment. In the last decade, we have witnessed substantial development in medical imaging technologies, which are now becoming an integral part of the diagnosis and treatment processes. In this study, we generalize the concept of entropy difference defined in terms of the Marsaglia formula (usually used to describe two different figures, statues, etc.) by using the quantum calculus. We then employ the result to extend local binary patterns (LBP) and obtain the quantum entropy LBP (QELBP). The proposed study comprises two approaches to feature extraction from MRI brain scans, namely the QELBP features and the deep learning (DL) features. The classification of MRI brain scans is improved by exploiting the excellent performance of the QELBP–DL feature extraction. Combining all of the extracted features increases the classification accuracy of a long short-term memory network when it is used as the brain tumor classifier. The maximum accuracy achieved in classifying a dataset comprising 154 MRI brain scans is 98.80%. The experimental results demonstrate that combining the extracted features improves the performance of MRI brain tumor classification.
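    For readers unfamiliar with LBP, the classical descriptor that QELBP extends encodes each pixel by comparing it with its 8 neighbours and packing the comparison bits into one byte. The following is a minimal sketch of that classical operator only, not the quantum-entropy variant proposed in the paper.

```python
import numpy as np

def lbp_image(gray):
    """Classical 8-neighbour local binary pattern: each interior pixel gets
    an 8-bit code, one bit per neighbour that is >= the centre value."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # Offsets of the 8 neighbours, ordered clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    return codes

patch = np.array([[5, 9, 1],
                  [4, 6, 7],
                  [8, 2, 3]])
print(lbp_image(patch))  # one interior pixel -> prints [[74]]
```

Histograms of these codes over image regions form the texture feature vector; QELBP replaces the plain comparison with a quantum-entropy-based measure.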

    Segmentation of brain tumors in MRI images using three-dimensional active contour without edge

    Brain tumor segmentation in magnetic resonance imaging (MRI) is considered a complex procedure because of the variability of tumor shapes and the complexity of determining the tumor location, size, and texture. Manual tumor segmentation is a time-consuming task that is highly prone to human error. Hence, this study proposes an automated method that can identify tumor slices and segment the tumor across all image slices in volumetric MRI brain scans. First, a set of algorithms in the pre-processing stage is used to clean and standardize the collected data. A modified gray-level co-occurrence matrix and analysis of variance (ANOVA) are employed for feature extraction and feature selection, respectively. A multi-layer perceptron neural network is adopted as a classifier, and a bounding 3D-box-based genetic algorithm is used to identify the location of pathological tissues in the MRI slices. Finally, the 3D active contour without edge is applied to segment the brain tumors in volumetric MRI scans. The experimental dataset consists of 165 patient images collected from the MRI Unit of Al-Kadhimiya Teaching Hospital in Iraq. The tumor segmentation achieved an accuracy of 89% ± 4.7% compared with manual processes.
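    The gray-level co-occurrence matrix (GLCM) mentioned in the pipeline counts how often pairs of grey levels occur at a fixed spatial offset; texture features such as contrast are then computed from the normalised matrix. Below is a small sketch of the standard (unmodified) GLCM and its contrast feature, under the assumption of a single offset and a toy 3-level image; the paper's "modified" GLCM is not reproduced here.

```python
import numpy as np

def glcm(img, dy, dx, levels):
    """Grey-level co-occurrence matrix for one offset (dy, dx): entry (i, j)
    counts pixels of level i whose neighbour at the offset has level j."""
    m = np.zeros((levels, levels), dtype=np.int64)
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m

def contrast(m):
    """Contrast feature: sum of (i - j)^2 weighted by normalised counts."""
    p = m / m.sum()
    i, j = np.indices(m.shape)
    return float(((i - j) ** 2 * p).sum())

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
m = glcm(img, 0, 1, levels=3)   # horizontal neighbour to the right
print(m)
print(contrast(m))
```

Several such features (contrast, energy, homogeneity, ...) over several offsets would form the feature vector that ANOVA then filters before classification.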

    Generating natural language specifications from UML class diagrams

    Early phases of software development are known to be problematic and difficult to manage, and errors occurring during these phases are expensive to correct. Many systems have been developed to aid the transition from informal Natural Language requirements to semi-structured or formal specifications. Furthermore, consistency checking is seen by many software engineers as the solution to reduce the number of errors occurring during the software development life cycle and to allow early verification and validation of software systems. However, this is confined to the models developed during analysis and design and fails to include the early Natural Language requirements. This excludes proper user involvement and creates a gap between the original requirements and the updated and modified models and implementations of the system. To improve this process, we propose a system that generates Natural Language specifications from UML class diagrams. We first investigate the variation of the input language used in naming the components of a class diagram, based on the study of a large number of examples from the literature, and then develop rules for removing ambiguities in the subset of Natural Language used within UML. We use WordNet, a linguistic ontology, to disambiguate the lexical structures of the UML string names and generate semantically sound sentences. Our system is developed in Java and is tested on an independent, though academic, case study.
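    A first step in disambiguating UML string names is splitting identifiers such as camelCase or underscored names into candidate words, which can then each be looked up in a lexical ontology such as WordNet. The splitter below is an illustrative sketch of that preprocessing step (the paper's system is in Java; this Python version and its regular expression are assumptions, and the WordNet lookup itself is omitted).

```python
import re

def split_uml_name(name):
    """Split a UML identifier like 'customerOrderID' or 'Student_Name'
    into lowercase words for subsequent WordNet lookup."""
    words = []
    for part in re.split(r'[_\s]+', name):
        # Break camelCase/PascalCase boundaries and keep acronyms and digits.
        words.extend(re.findall(r'[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+', part))
    return [w.lower() for w in words]

print(split_uml_name("customerOrderID"))  # ['customer', 'order', 'id']
print(split_uml_name("Student_Name"))     # ['student', 'name']
```

Each recovered word can then be checked against WordNet synsets to choose a sense and generate a grammatical sentence.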

    Applying Semantic Parsing to Question Answering Over Linked Data: Addressing the Lexical Gap

    Hakimov S, Unger C, Walter S, Cimiano P. Applying Semantic Parsing to Question Answering Over Linked Data: Addressing the Lexical Gap. In: Biemann C, Handschuh S, Freitas A, Meziane F, Metais E, eds. Natural Language Processing and Information Systems: 20th International Conference on Applications of Natural Language to Information Systems, NLDB 2015, Passau, Germany, June 17-19, 2015, Proceedings. LNCS. Vol 9103. Springer International Publishing; 2015: 103-109.

    Question answering over linked data has emerged in the past years as an important topic of research in order to provide natural language access to a growing body of linked open data on the Web. In this paper we focus on analyzing the lexical gap that arises as a challenge for any such question answering system. The lexical gap refers to the mismatch between the vocabulary used in a user question and the vocabulary used in the relevant dataset. We implement a semantic parsing approach and evaluate it on the QALD-4 benchmark, showing that the performance of such an approach suffers from training data sparseness. Its performance can, however, be substantially improved if the right lexical knowledge is available. To show this, we model a set of lexical entries by hand to quantify the number of entries that would be needed. Further, we analyze whether a state-of-the-art tool for inducing ontology lexica from corpora can derive these lexical entries automatically. We conclude that further research and investments are needed to derive such lexical knowledge automatically or semi-automatically.
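    The lexical gap can be illustrated with a tiny hand-built lexicon that maps question phrases to ontology properties. The table and property names below are illustrative assumptions (DBpedia-style identifiers, not the paper's actual QALD-4 lexicon); the point is that any phrase outside the lexicon, such as a rarer synonym, goes unresolved.

```python
# Hypothetical hand-built lexicon: question vocabulary -> ontology property.
LEXICON = {
    "wife": "dbo:spouse",
    "husband": "dbo:spouse",
    "married to": "dbo:spouse",
    "wrote": "dbo:author",
    "author of": "dbo:author",
}

def map_phrase(phrase):
    """Resolve a question phrase to an ontology property, or None when
    the lexical gap is not covered by the lexicon."""
    return LEXICON.get(phrase.lower())

print(map_phrase("Wife"))    # covered: dbo:spouse
print(map_phrase("penned"))  # uncovered synonym -> None, the lexical gap
```

Scaling such a lexicon by hand is what the paper quantifies, and inducing it automatically from corpora is what it evaluates.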